Audit

Anomaly Detection in Double-entry Bookkeeping Data by Federated Learning System with Non-model Sharing Approach

Mashiko, Sota, Kawamata, Yuji, Nakayama, Tomoru, Sakurai, Tetsuya, Okada, Yukihiko

arXiv.org Artificial Intelligence

Anomaly detection is crucial in financial auditing, and effective detection often requires large volumes of data from multiple organizations. However, confidentiality concerns hinder data sharing among audit firms. Although the federated learning (FL)-based approach FedAvg has been proposed to address this challenge, its use of multiple communication rounds increases its overhead, limiting its practicality. In this study, we propose a novel framework employing Data Collaboration (DC) analysis -- a non-model-sharing FL method -- to streamline model training into a single communication round. Our method first encodes journal entry data via dimensionality reduction to obtain secure intermediate representations, then transforms them into collaboration representations for building an autoencoder that detects anomalies. We evaluate our approach on a synthetic dataset and on real journal entry data from multiple organizations. The results show that our method not only outperforms single-organization baselines but also exceeds FedAvg in non-i.i.d. experiments on real journal entry data that closely mirror real-world conditions. By preserving data confidentiality and reducing iterative communication, this study addresses a key auditing challenge -- ensuring data confidentiality while integrating knowledge from multiple audit firms. Our findings represent a significant advance in artificial intelligence-driven auditing and underscore the potential of FL methods in high-security domains.
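As a rough illustration of the pipeline this abstract describes (private dimensionality reduction to intermediate representations, alignment into a shared collaboration space via common anchor data, then anomaly scoring), here is a minimal NumPy sketch. Everything in it is a hypothetical stand-in: the data are synthetic, the private encoders are plain PCA, the least-squares alignment is a simplification of DC analysis, and a linear PCA reconstruction replaces the autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_map(X, k):
    """Fit a k-dim PCA projection: the organization-private encoder."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return lambda Z: (Z - mu) @ Vt[:k].T

# Two organizations with private journal-entry feature matrices (synthetic).
d = 20
X1 = rng.normal(size=(300, d))
X2 = rng.normal(size=(300, d))
anchor = rng.normal(size=(50, d))          # shared, public anchor dataset

f1, f2 = pca_map(X1, 8), pca_map(X2, 8)    # private encoders, never shared
A1, A2 = f1(anchor), f2(anchor)            # only encodings leave each org

# Collaboration transform: align org 2's space with org 1's via least
# squares on the shared anchors (a simple stand-in for DC analysis).
G2, *_ = np.linalg.lstsq(A2, A1, rcond=None)
Z = np.vstack([f1(X1), f2(X2) @ G2])       # pooled collaboration reps

# Anomaly score: reconstruction error of a linear autoencoder (PCA with a
# 4-dim bottleneck) fit on the pooled representations.
mu = Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Z - mu, full_matrices=False)
W = Vt[:4]
recon = (Z - mu) @ W.T @ W + mu
scores = np.linalg.norm(Z - recon, axis=1)
flags = scores > np.quantile(scores, 0.95)  # flag the top 5% as anomalies
```

The property mirrored here is the single communication round: each organization shares only encoded representations and encoded anchors, never its raw journal entries or its private encoder.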


Auditing of AI: Legal, Ethical and Technical Approaches

Mokander, Jakob

arXiv.org Artificial Intelligence

AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society's topical collection on Auditing of AI, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers' governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available (and complementary) approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.


Advancing AI Audits for Enhanced AI Governance

Ema, Arisa, Sato, Ryo, Hase, Tomoharu, Nakano, Masafumi, Kamimura, Shinji, Kitamura, Hiromu

arXiv.org Artificial Intelligence

As artificial intelligence (AI) is integrated into various services and systems in society, many companies and organizations have proposed AI principles and policies and made related commitments. Conversely, some have proposed the need for independent audits, arguing that the voluntary principles adopted by the developers and providers of AI services and systems insufficiently address risk. This policy recommendation summarizes the issues related to the auditing of AI services and systems and presents three recommendations for promoting AI auditing that contribute to sound AI governance. Recommendation 1: Development of institutional design for AI audits. Recommendation 2: Training human resources for AI audits. Recommendation 3: Updating AI audits in accordance with technological progress. In this policy recommendation, AI is assumed to be systems that recognize and make predictions from data, with the last chapter outlining how generative AI should be audited.


Risk-limiting Financial Audits via Weighted Sampling without Replacement

Shekhar, Shubhanshu, Xu, Ziyu, Lipton, Zachary C., Liang, Pierre J., Ramdas, Aaditya

arXiv.org Artificial Intelligence

We introduce the notion of a risk-limiting financial audit (RLFA): given $N$ transactions, the goal is to estimate the total misstated monetary fraction ($m^*$) to a given accuracy $\epsilon$, with confidence $1-\delta$. We do this by constructing new confidence sequences (CSs) for the weighted average of $N$ unknown values, based on samples drawn without replacement according to a (randomized) weighted sampling scheme. Using the idea of importance weighting to construct test martingales, we first develop a framework to construct CSs for arbitrary sampling strategies. Next, we develop methods to improve the quality of CSs by incorporating side information about the unknown values associated with each item. We show that when the side information is sufficiently predictive, it can directly drive the sampling. Addressing the case where the accuracy of the side information is unknown a priori, we introduce a method that incorporates it via control variates. Crucially, our construction is adaptive: if the side information is highly predictive of the unknown misstated amounts, then the benefits of incorporating it are significant; but if the side information is uncorrelated, our methods learn to ignore it. Our methods recover state-of-the-art bounds for the special case when the weights are equal, which has already found applications in election auditing. The harder weighted case solves our more challenging problem of AI-assisted financial auditing.
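The core estimation idea can be illustrated with a much simpler with-replacement sketch (the paper's contribution is the harder without-replacement, anytime-valid construction). Below, sampling transactions in proportion to their monetary weight makes the plain sample mean unbiased for $m^* = \sum_i w_i f_i$, and a fixed-sample Hoeffding interval stands in for the paper's confidence sequences. The data and parameters are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# N transactions with normalized monetary weights w and unknown misstated
# fractions f in [0, 1] (synthetic: ~5% of items are fully misstated).
N = 1000
w = rng.exponential(size=N)
w /= w.sum()
f = (rng.random(N) < 0.05).astype(float)
m_star = float(w @ f)                 # true value, unknown to the auditor

# Dollar-unit sampling: drawing index I with P(I = i) = w_i makes f_I an
# unbiased estimate of m* = sum_i w_i * f_i, so the sample mean estimates m*.
n = 400
draws = rng.choice(N, size=n, p=w)
est = f[draws].mean()

# Fixed-sample Hoeffding interval (valid since each f_i lies in [0, 1]).
delta = 0.05
half = np.sqrt(np.log(2 / delta) / (2 * n))
lo, hi = est - half, est + half
```

The with-replacement interval here is only valid at the fixed sample size n; the paper's confidence sequences remain valid at every intermediate sample size, which is what lets an auditor stop sampling adaptively.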


Federated Continual Learning to Detect Accounting Anomalies in Financial Auditing

Schreyer, Marco, Hemati, Hamed, Borth, Damian, Vasarhelyi, Miklos A.

arXiv.org Artificial Intelligence

The International Standards on Auditing require auditors to collect reasonable assurance that financial statements are free of material misstatement. At the same time, a central objective of Continuous Assurance is the 'real-time' assessment of digital accounting journal entries. Recently, driven by advances in artificial intelligence, Deep Learning techniques have emerged in financial auditing to examine vast quantities of accounting data. However, learning highly adaptive audit models in decentralized and dynamic settings remains challenging. It requires the study of data distribution shifts over multiple clients and time periods. In this work, we propose a Federated Continual Learning framework enabling auditors to learn audit models from decentralized clients continuously. We evaluate the framework's ability to detect accounting anomalies in common scenarios of organizational activity. Our empirical results, using real-world datasets and combined federated continual learning strategies, demonstrate the learned model's ability to detect anomalies in audit settings subject to data distribution shifts.
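For context, the FedAvg-style aggregation underlying such federated frameworks can be sketched as follows: each client runs local gradient steps on its own (here non-i.i.d., synthetic) data, and the server averages the returned parameters weighted by local dataset size. This is a generic toy on a least-squares objective, not the paper's framework or its continual-learning strategy.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sgd(w, X, y, lr=0.05, steps=20):
    """Client-side training: plain gradient steps on a least-squares loss."""
    for _ in range(steps):
        w = w - lr * (X.T @ (X @ w - y)) / len(X)
    return w

# Three audit clients with non-i.i.d. local data (different feature shifts).
d = 5
w_true = rng.normal(size=d)
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(loc=shift, size=(200, d))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=200)))

# FedAvg: broadcast the global model, train locally on each client, then
# average the returned parameters weighted by each client's dataset size.
sizes = [len(X) for X, _ in clients]
w_global = np.zeros(d)
for _ in range(30):                    # 30 communication rounds
    local = [local_sgd(w_global, X, y) for X, y in clients]
    w_global = np.average(local, axis=0, weights=sizes)
```

Only model parameters cross the network in each round; raw client data never does. The abstract's "data distribution shifts" correspond here to the per-client feature shifts, which a single-client model would not see.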


RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations

Müller, Ricardo, Schreyer, Marco, Sattarov, Timur, Borth, Damian

arXiv.org Artificial Intelligence

Detecting accounting anomalies is a recurrent challenge in financial statement audits. Recently, novel methods derived from Deep Learning (DL) have been proposed to audit the large volumes of a statement's underlying accounting records. However, due to their vast number of parameters, such models exhibit the drawback of being inherently opaque. At the same time, the concealing of a model's inner workings often hinders its real-world application. This observation holds particularly true in financial audits, since auditors must reasonably explain and justify their audit decisions. Nowadays, various Explainable AI (XAI) techniques have been proposed to address this challenge, e.g., SHapley Additive exPlanations (SHAP). However, in the unsupervised DL settings often applied in financial audits, these methods explain the model output at the level of encoded variables. As a result, the explanations of Autoencoder Neural Networks (AENNs) are often hard to comprehend by human auditors. To mitigate this drawback, we propose RESHAPE, which explains the model output on an aggregated attribute level. In addition, we introduce an evaluation framework to compare the versatility of XAI methods in auditing. Our experimental results provide empirical evidence that RESHAPE yields versatile explanations compared to state-of-the-art baselines. We envision such attribute-level explanations as a necessary next step in the adoption of unsupervised DL techniques in financial auditing.
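The attribute-level aggregation idea can be sketched independently of any particular explainer: attributions computed on one-hot encoded columns are summed back over the columns belonging to each original attribute, so the auditor sees one value per attribute instead of one per encoded variable. The group layout and attribution matrix below are hypothetical placeholders, not RESHAPE itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two categorical journal-entry attributes, one-hot encoded into 7 columns:
# 'account' occupies columns 0-3, 'currency' columns 4-6 (hypothetical layout).
groups = {"account": slice(0, 4), "currency": slice(4, 7)}

# Placeholder per-column attributions (e.g., SHAP values for an autoencoder's
# reconstruction error), one row per journal entry.
phi = rng.normal(size=(5, 7))

# Attribute-level explanation: sum the encoded columns of each attribute.
# Summation preserves additivity, so per-entry attribution totals are unchanged.
agg = {name: phi[:, sl].sum(axis=1) for name, sl in groups.items()}
```

Because the aggregation is a plain sum over each attribute's columns, the per-entry total attribution is identical before and after aggregation, which is what keeps the aggregated explanation consistent with the column-level one.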


NYC Targets Artificial Intelligence Bias in Hiring Under New Law

#artificialintelligence

New York City has a new law on the books--one of the boldest measures of its kind in the country--that aims to curb hiring bias that can occur when businesses use artificial intelligence tools to screen out job candidates. Employers in the city will be banned from using automated employment decision tools to screen job candidates, unless the technology has been subject to a "bias audit" conducted a year before the use of the tool. The New York City Council passed the measure on Nov. 10. Without the signature from Mayor Bill de Blasio, it "lapses" into law after 30 days, which falls on Friday. The mayor said he supports the law.


BQE Software Launches First Voice-Enabled Artificial Intelligence Feature for Accounting Firms

#artificialintelligence

"Our goal with Core Intelligence is to provide a better business experience with less administrative work, and artificial intelligence, powered by Core, makes that possible."


AI startup founder Solon Angel thrives amid chaos

#artificialintelligence

The word "chaos" comes up often in an interview with Solon Angel, founder and chief strategy officer of Mindbridge Analytics Inc. He grew up in the Caribbean and France, where he dealt with discrimination, family instability and violence. Today the Ottawa-based entrepreneur says he's "comfortable" with the other kind of chaos, that which comes naturally in a tech startup. After the 2008 global financial crisis, Mr. Angel aimed to come up with something that might keep it from happening again. In 2015, he launched Mindbridge with the goal of transforming the financial auditing business with machine learning, or artificial intelligence.


Independent Auditor's Report

AI Magazine

The Board of Directors American Association For Artificial Intelligence Menlo Park, California We have audited the statement of financial position of American Association for Artificial Intelligence as of December 31, 1996 and the related statements of activities, changes in net assets and cash flows for the year then ended. These financial statements are the responsibility of the Association's management. Our responsibility is to express an opinion on these financial statements based on our audits. We conducted our audits in accordance with generally accepted auditing standards. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement.